Nature Methods
Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match the content profile of Nature Methods, based on 336 papers previously published here. The average preprint has a 0.37% match score for this journal, so anything above that is an above-average fit.
Abbey, A.; Meroz, Y.
Quantitative studies of plant growth and environmental responses increasingly rely on time-series imaging, yet automated segmentation remains challenging due to continuous growth, large non-rigid morphological change, and frequent self-occlusion. Traditional image-processing pipelines and task-specific deep learning models often require extensive annotated datasets and retraining, limiting portability across species, developmental stages, and imaging conditions. Here we present SAP (Segment Any Plant), a plant-focused framework that leverages the pretrained Segment Anything Model 2 (SAM2) to enable few-shot, training-free segmentation of plant time-series imagery. SAP integrates interactive prompting, automated temporal mask propagation, and centerline extraction within a web-based interface, allowing users to move from raw images to quantitative descriptors of organ shape and dynamics without programming expertise. Across multiple systems, including Arabidopsis thaliana rosette development, root growth, sunflower gravitropism, and confocal root microscopy, SAP achieves high segmentation accuracy (mean IoU 0.89-0.93) and sub-pixel centerline precision from single-frame prompting. By reducing the need for task-specific retraining, SAP provides a transferable framework for reproducible time-series phenotyping across diverse experimental contexts.
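The headline metric is straightforward to reproduce; a minimal NumPy sketch of mean IoU over binary mask pairs (function names are illustrative, not SAP's API):

import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

def mean_iou(pairs) -> float:
    """Average IoU over (prediction, ground-truth) mask pairs."""
    return float(np.mean([iou(p, t) for p, t in pairs]))

# Toy example: prediction covers 4 pixels, ground truth 6, overlap 4.
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True
print(iou(a, b))  # 4/6 ~= 0.667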
Qiao, Y.; Wang, J.; Xi, J.; Ding, J.; Chen, T.; Zhang, Y.; Qiu, L.; Zhao, W.; Liu, J.; Xu, F.
Deciphering the spatial organization of macromolecular complexes in their native context is central to structural biology. Particle fusion in single-molecule localization microscopy offers a unique capability for high-resolution structural reconstruction in situ. However, existing methods face significant challenges from large rotational perturbations and sparse labeling, resulting in compromised accuracy and substantial computational cost. We present DeepSRFusion, a self-supervised pretraining framework for three-dimensional super-resolution particle fusion. By representing single-molecule point clouds as Gaussian Mixture Models, DeepSRFusion integrates data-driven feature learning with physical imaging constraints. A two-stage optimization strategy with dynamic template updating enhances robustness, and a novel Clustering Error metric quantifies fusion quality. Nanometer-scale validation on both simulated and experimental datasets demonstrates high reconstruction fidelity and structural consistency with cryo-electron microscopy and AlphaFold3. DeepSRFusion remains effective under challenging imaging conditions, including large 3D rotations, sparse labeling, high localization uncertainty, and limited particle numbers, while achieving over 100-fold speedups compared to current methods. It resolves fine structural features with a measured spatial resolution of 1.60 ± 0.10 nm, sufficient to distinguish protein pairs spaced ~10 nm apart and to visualize tilted internal substructures within macromolecular assemblies. DeepSRFusion provides a powerful tool for high-precision structural analysis in native cellular environments.
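The GMM representation at the core of the method can be sketched with scikit-learn; a toy 3D localization cloud stands in for real single-molecule data, and the component count is an illustrative choice, not the paper's setting:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "particle": two 3D clusters standing in for labeled binding sites.
pts = np.vstack([
    rng.normal([0, 0, 0], 2.0, size=(200, 3)),
    rng.normal([10, 0, 0], 2.0, size=(200, 3)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(pts)
print(gmm.means_)               # recovered cluster centers
print(gmm.covariances_.shape)   # (2, 3, 3): one covariance per component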
Zhou, X.; Wang, S.
Deep learning can extract quantitative measurements from microscopy images that are inaccessible to classical analysis, but developing these models requires machine learning expertise that most imaging scientists do not have. Here we present a framework in which a researcher describes their microscopy problem to a large language model (LLM) agent in under ten minutes of conversation--specifying what they image, what they want to measure, and what success looks like--and the agent autonomously handles the rest: designing physics-based training data, implementing a neural network, training, diagnosing failures, and iterating without human intervention. A researcher can start the agent before leaving the lab; overnight, it tests tens to a hundred model variations, each one an experiment that would otherwise demand active attention. We validate the framework across six microscopy modalities and four problem types. On the BBBC039 nuclear segmentation benchmark, the agent autonomously trains a U-Net with 3-class semantic segmentation and morphological post-processing, achieving pixel-level Dice of 0.97 and object-level F1 of 0.84--within 7% of the published baseline--while diagnosing a data pipeline bug that no amount of hyperparameter tuning could resolve. On single-protein holographic microscopy, the agent reads a published paper, designs a simulator, and develops an optimized model in a single session. On PatchCamelyon histopathology classification, the agent autonomously evolves through four optimization phases--from scratch training through transfer learning and regularization to inference-time ensembling--completing 97 iterations on 262,144 images to reach 89.3% test accuracy and 96.3% AUC, nearly matching the published rotation-equivariant baseline. This framework enables microscopy researchers to use deep learning-based image analysis without machine learning domain knowledge.
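The 3-class segmentation with morphological post-processing that the agent arrived at follows a common pattern: predict background/interior/boundary per pixel, then label connected interior regions as instances. A minimal sketch under that assumption (the agent's exact post-processing may differ):

import numpy as np
from scipy import ndimage

# Toy 3-class map: 0 = background, 1 = interior, 2 = boundary.
seg = np.zeros((8, 8), int)
seg[1:4, 1:4] = 1; seg[1:4, 4] = 2   # nucleus A with a boundary column
seg[5:7, 5:7] = 1                    # nucleus B

# Boundary pixels keep touching nuclei apart, so connected-component
# labeling of the interior class yields one label per object instance.
instances, n = ndimage.label(seg == 1)
print(n)  # 2 instances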
Stringer, C.; Ki, C.; Del Grosso, N.; LaFosse, P.; Zhang, Q.; Pachitariu, M.
Neural recordings using optical methods have improved dramatically. For example, we demonstrate here recordings of over 100,000 neurons from the mouse cortex obtained with a standard commercial microscope. To process such large datasets, we developed Suite2p, a collection of efficient algorithms for motion correction, cell detection, activity extraction and quality control. We also developed new approaches to benchmark performance on these tasks. Our GPU-accelerated non-rigid motion correction substantially outperforms alternative methods, while running over five times faster. For cell detection, Suite2p outperforms the CNMF algorithm in Caiman and Fiola, finding more cells and producing fewer false positives, while running in a fraction of the time. We also introduce quality control steps for users to evaluate performance on their own data, while offering alternative algorithms for specialized types of recordings such as those from one-photon and voltage imaging.
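Suite2p's registration is non-rigid and GPU-accelerated; as a point of reference, the basic rigid building block, phase-correlation shift estimation, looks like this on synthetic frames (this is illustrative, not Suite2p code):

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
mov = nd_shift(ref, (3, -2))                   # simulate rigid frame motion

est, _, _ = phase_cross_correlation(ref, mov)  # shift registering mov to ref
corrected = nd_shift(mov, est)                 # apply the estimated correction
print(est)                                     # ~ [-3., 2.]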
Magana, S.; Zhao, W.; Dao Duc, K.
Inferring continuous morphological transformations from collections of static biological snapshots is an important, yet challenging problem. In the context of cellular biology, prevailing approaches reduce 3D shape collections to static reconstructions or hand-crafted descriptors, which fail to capture smooth, multi-dimensional transitions. We present MorphCurveVAE, a two-stage pipeline for constructing continuous morphological trajectories from sets of static, segmented 3D microscopy images. Stage 1 learns a smooth, compact latent manifold of volumetric morphologies using a multi-branch convolutional variational auto-encoder (VAE) that can encode multiple correlated substructures into disentangled subspaces. Stage 2 extracts a constrained, topologically-aware principal curve through the augmented latent space to produce directional and correlated trajectories of structural dynamics. To demonstrate our framework, we apply MorphCurveVAE to a large public dataset (Allen Institute WTC-11) of segmented volumetric cell and nucleus images spanning the mitosis cycle. Our results indicate high-quality reconstructions, low projection errors to the fitted principal curve, and biologically and visually plausible continuous animations. These results suggest MorphCurveVAE as a practical tool for modeling biological morphological trajectories, while remaining broadly applicable to other biological imaging domains where time-resolved observations are unavailable.
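Stage 1's objective is the standard VAE trade-off between reconstruction and latent regularity; a minimal PyTorch sketch assuming a diagonal-Gaussian posterior (the multi-branch, disentangled architecture itself is omitted):

import torch
import torch.nn.functional as F

def vae_loss(recon, target, mu, logvar, beta=1.0):
    """Reconstruction term + beta-weighted KL(N(mu, sigma^2) || N(0, I))."""
    rec = F.mse_loss(recon, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Toy usage on random tensors standing in for volumetric batches.
x = torch.rand(2, 1, 8, 8, 8)
mu, logvar = torch.zeros(2, 16), torch.zeros(2, 16)
print(vae_loss(x, x, mu, logvar))  # KL vanishes at mu=0, logvar=0; loss = 0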
Oshinjo, A.; Wu, J.; Petrov, P.; Izzi, V.
Despite the wide adoption of spatial transcriptomics (ST) into the biomedical community, its practical use remains constrained by a fundamental resolution-coverage trade-off and by reliance on computationally intensive and static workflows. As a result, transcriptome-wide spatial data are typically interpreted as ad-hoc processed outputs rather than explored dynamically as one would do with stained or fluorescence tissue images, limiting ST accessibility and slowing biological insight. Here we introduce NaVis, a web-based virtual microscopy framework that redefines how spatial transcriptomics is experienced. NaVis enables near-real-time, on-demand super-resolution inference from low-resolution whole-transcriptome platforms (10x Genomics Visium V1/V2, CytAssist and Visium HD), generating high-resolution reconstructions that approach microscopy-level detail while preserving transcriptome-wide coverage. Unlike conventional interpolation approaches that produce fixed images, NaVis computes and refines spatial reconstructions interactively as users navigate tissue sections, transforming resolution from a platform-imposed constraint into a dynamic, user-controlled parameter. NaVis is also delivered through a fully point-and-click browser interface requiring no coding expertise, thus removing computational mediation and allowing clinicians, pathologists and experimental researchers to directly interrogate spatial molecular architecture. By coupling high-resolution inference with immediate visual interaction, NaVis shifts spatial transcriptomics from a static computational analysis to an exploratory, microscopy-like modality, broadening its accessibility, its conceptual reach, and its potential for biological discovery.
Wei, Z.; Curtin, I.; Kyere, F. A.; Borland, D.; Yi, H.; Kim, M.; Dere, M.; McCormick, C. M.; Krupa, O.; Shih, Y.-Y. I.; Zylka, M. J.; Stein, J. L.; Wu, G.
Advances in tissue clearing and light-sheet microscopy enable cellular-resolution whole-brain 3D imaging. However, existing whole-brain quantification tools do not yet meet efficiency demands, nor do they assess morphometry. Here we present CellPheno, a 3D nuclei instance segmentation framework for high-throughput cellular phenotyping. CellPheno quantifies an entire P4 mouse brain within 15 hours. We showcase whole-brain morphometry, enhanced stitching, and co-localization across multiple cell types in 53 brains.
Munoz-Barrutia, A.; Lachowski, D.; Rey-Paniagua, G.
Live-cell microscopy restoration is constrained by a trade-off between inference latency and texture preservation. While diffusion models provide high textural fidelity, the computational cost of iterative sampling currently limits their use in low-latency instrument feedback loops. Here, we present NAFNet GAN, a restoration framework that couples an activation-free backbone with a perceptual adversarial objective to enable high-throughput analysis. Unlike diffusion architectures, NAFNet GAN achieves an inference latency of ~110 ms for 1024 x 1024 inputs, potentially suitable for real-time instrument feedback loops. Across eight datasets ranging from STED nanoscopy to histopathology, the method achieves the lowest Learned Perceptual Image Patch Similarity (LPIPS) scores in 7 of 8 benchmarks while preserving structural coherence (e.g., MS-SSIM > 0.968 in Cryo-EM), which facilitates reliable downstream analysis. Supported by performance benchmarks in the AI4Life Denoising Challenge, NAFNet GAN restores structural features from low-photon-budget acquisitions, maintaining the temporal resolution required for dynamic live-cell workflows.
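Both reported metrics are available as off-the-shelf packages; a hedged evaluation sketch using the third-party lpips and pytorch-msssim libraries on placeholder tensors (both packages must be installed, and the LPIPS weights download on first use):

import torch
import lpips
from pytorch_msssim import ms_ssim

restored = torch.rand(1, 3, 256, 256)
truth = torch.rand(1, 3, 256, 256)

loss_fn = lpips.LPIPS(net="alex")              # lower LPIPS = better
d = loss_fn(restored * 2 - 1, truth * 2 - 1)   # LPIPS expects inputs in [-1, 1]
s = ms_ssim(restored, truth, data_range=1.0)   # higher MS-SSIM = better
print(float(d), float(s))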
Feng, Y.; Robers, Z.; Rasheed, L.; Miao, Y.; Wen, S.; Lee, K.; Sohigian, J.; Brbic, M.; Hickey, J. W.
Spatially resolved omics technologies reveal tissue organization at single-cell resolution but remain limited by the cost of the assays, incomplete spatial coverage, 2D-only imaging, and experimental artifacts. These factors motivate the need for in silico methods that can reconstruct or extend tissue context beyond what current spatial measurements provide. We present MORPHE (MOdeling of stRuctured sPatial High-dimensional Embeddings), an AI framework that learns to synthesize biologically faithful tissue architecture directly from spatial-omics data. MORPHE introduces a graph-informed probabilistic embedding that maps discrete cell identities and their spatial relationships into a continuous RGB-like latent space compatible with diffusion modeling. This representational bridge enables spatial cellular maps to leverage large pre-trained image-generative models while preserving biological interpretability upon decoding. By modeling cells as the fundamental units of generation and learning how their identities and spatial relationships collectively give rise to large-scale tissue structure, MORPHE enables generation and reconstruction of tissue architecture at single-cell resolution. We applied the method across large-scale single-cell proteomic datasets from the intestine and single-cell transcriptomic datasets from the brain, showing computational scalability across millions of cells. We used MORPHE on these datasets to outpaint beyond experimentally restricted fields of view, inpaint missing or experimentally damaged tissue regions, and perform cross-tissue imputation, connecting separated tissue regions into a single contiguous sample in both 2D and 3D. MORPHE represents a new class of tissue generation algorithms that will help solve current limitations and challenges with single-cell spatial-omics datasets.
Lüthi, J.; Cerrone, L.; Comparin, T.; Hess, M.; Hornbachner, R.; Tschan, A.; Glasner de Medeiros, G. Q.; Repina, N. A.; Cantoni, L. K.; Steffen, F. D.; Bourquin, J.-P.; Liberali, P.; Pelkmans, L.; Uhlmann, V.
The rapid growth in microscopy data volume, dimensionality, and diversity urgently calls for scalable and reproducible analysis frameworks. While efforts on the open OME-Zarr format have helped standardize the storage of large microscopy datasets, solutions for standardized processing are still lacking. Here, we introduce two complementary contributions to address this gap: 1) the Fractal task specification, defining OME-Zarr processing units that can interoperate across computational environments and workflow engines, and 2) the Fractal platform, using this specification to enable scalable and modular OME-Zarr-native analysis workflows. We demonstrate their use across diverse biological research data, including terabyte-scale multiplexed, volumetric, and time-lapse imaging. In a clinical setting, we show that Fractal workflows achieve near-identical quantification of millions of cells across independent deployments, demonstrating the reproducibility required for translational applications. With its growing community of contributors, the Fractal ecosystem provides a foundation for FAIR microscopy image analysis relying on open file formats.
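Fractal tasks operate on OME-Zarr stores; for orientation, reading one multiscale level with the zarr library looks roughly like this (the path and axis layout are hypothetical):

import zarr

root = zarr.open_group("plate.zarr", mode="r")  # hypothetical OME-Zarr path
level0 = root["0"]                              # highest-resolution pyramid level
print(level0.shape, level0.dtype)               # e.g. (t, c, z, y, x)
chunk = level0[0, 0, 0, :256, :256]             # chunked read: only this tile loads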
Chen, K.; Chen, Z.; Zheng, D.; Fang, X.; Liang, J.; Li, Z.; Chen, Y.; Zou, J.; Cai, B.; Chen, S.; Huang, K.
Computational methods have advanced the analysis of animal behavior, yet significant challenges remain in data standardization, analytical reproducibility, and workflow integration. Existing computational solutions often demand extensive programming proficiency or compel users to navigate a highly fragmented ecosystem of disconnected tools for tracking, statistical analysis, and visualization. Here, we present EthoClaw, an open-source, artificial intelligence-driven workflow platform built upon the OpenClaw agentic framework, functioning as a locally deployable AI assistant for behavioral research. EthoClaw provides an integrated computational infrastructure that seamlessly bridges the gap between raw behavioral video acquisitions and publishable scientific results. In this study, we demonstrate the platform's capacity to natively ingest video data via a dual-mode tracking architecture: utilizing ultra-fast image processing for rapid object detection, and leveraging SuperAnimal models for precise, markerless postural tracking. To ensure maximal interoperability, EthoClaw automatically converts various tracking data formats into DeepLabCut-compatible formats, enabling high-throughput phenotyping by generating publication-quality visualizations alongside rigorous multidimensional statistical profiling. Furthermore, the platform incorporates a large language model (LLM)-driven reporting module that dynamically synthesizes analytical documents, ensuring methodological transparency. Through an open field test, we validate the practical usability of EthoClaw while accelerating computational throughput by localizing heavy video processing to circumvent cloud bandwidth bottlenecks. Operating via an omnichannel natural language interface that integrates seamlessly with ubiquitous instant messaging software, EthoClaw democratizes advanced computational behavioral analysis, offering a holistic, highly efficient ecosystem that enforces experimental reproducibility and open science principles.
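The DeepLabCut-compatible conversion the platform automates amounts to re-indexing keypoint arrays into DLC's pandas MultiIndex layout; a hedged sketch in which the names, shapes, and HDF5 key are illustrative (writing HDF5 requires the pytables package):

import numpy as np
import pandas as pd

n_frames, bodyparts = 100, ["nose", "tailbase"]
xy = np.random.rand(n_frames, len(bodyparts), 2)  # x, y from any tracker
lik = np.ones((n_frames, len(bodyparts), 1))      # per-keypoint confidence

# DLC-style columns: scorer / bodyparts / coords with x, y, likelihood.
cols = pd.MultiIndex.from_product(
    [["EthoClaw"], bodyparts, ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
df = pd.DataFrame(
    np.concatenate([xy, lik], axis=2).reshape(n_frames, -1), columns=cols
)
df.to_hdf("video_tracking.h5", key="df_with_missing", format="table")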
Kondratyev, I.; Sun, W.
AI coding assistants excel at software tasks but lack structured access to laboratory hardware, the physical instruments that define experimental science. We present Ataraxis, an open-source framework that provides hardware control capabilities spanning camera acquisition, microcontroller communication, precision timing, and inter-process coordination, while exposing these capabilities to AI agents through Model Context Protocol (MCP) servers and domain-specific skills. Critically, Ataraxis separates configuration-time AI assistance from runtime data acquisition, ensuring that experiments run deterministically regardless of AI service availability. We validate this architecture in a two-photon imaging and virtual reality rodent behavior platform, demonstrating up to order-of-magnitude reductions in hardware validation, integration, and personnel onboarding time. By bridging the gap between AI software capabilities and physical instrument control, Ataraxis offers a reusable blueprint for AI-assisted scientific instrumentation across experimental disciplines. All code is available at github.com/Sun-Lab-NBB/ataraxis.
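The configuration-time/runtime separation is the architectural point: an agent may help author the configuration offline, but the acquisition loop touches only the frozen file and makes no network or AI calls. A minimal sketch of that pattern, with illustrative names that are not the Ataraxis API:

import json, time

def acquire(config_path: str):
    with open(config_path) as f:
        cfg = json.load(f)                 # frozen at configuration time
    period = 1.0 / cfg["frame_rate_hz"]
    for _ in range(cfg["n_frames"]):
        t0 = time.perf_counter()
        # ... trigger camera / microcontroller here, deterministically ...
        time.sleep(max(0.0, period - (time.perf_counter() - t0)))

# acquire("session.json")  # e.g. {"frame_rate_hz": 30, "n_frames": 900}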
Steen, P. R.; Masullo, L. A.; Pachmayr, I.; Kowalewski, R.; Heinze, L.; Steinek, C.; Kwon, J.; Honsa, M.; Reinhardt, S. C.; Grabmayr, H.; Jungmann, R.
State-of-the-art super-resolution microscopy enables nanometer-resolution imaging of proteins, but its multiplexing capacity has been fundamentally constrained. Here, we present Combi-PAINT, a combinatorial DNA barcoding strategy based on DNA-PAINT that facilitates superlinear scaling of multiplexing with the number of imaging rounds. Instead of assigning one unique docking strand per target, Combi-PAINT encodes targets as combinations of orthogonal sequences, allowing nonlinear scaling of target number with imaging rounds. Using just six imaging rounds, we resolve 41 targets in a field of view of 100 x 100 µm² with ~2.5 nm localization precision and ~90% decoding accuracy in under 30 minutes, representing the fastest sub-10 nm super-resolution microscopy acquisition to date. We benchmark decoding fidelity and demonstrate robust in situ performance in mammalian cells, achieving 97% decoding accuracy. Combi-PAINT is compatible with existing DNA-PAINT workflows and speed-enhancing techniques, offering a scalable, accessible platform for high-content, single-molecule spatial proteomics.
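The superlinear scaling follows from simple counting: with r rounds and codewords that mark a target in k of them, C(r, k) codes are available. One construction consistent with the numbers above, though not necessarily the paper's exact code, mixes weights 1-3 over six rounds:

from math import comb

r = 6
counts = [comb(r, k) for k in (1, 2, 3)]
print(counts, sum(counts))  # [6, 15, 20] -> 41 codes from 6 rounds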
Inecik, K.; Erken, E.; Theis, F. J.
Motivation: Reproducibility in computational biology fails silently when gene identifiers drift beneath unchanged analysis code: the same frozen pipeline, rerun months later, yields different results not because biology evolved but because identifier semantics shifted with upstream annotation releases--a failure mode invisible to version control and containerization, because the mapping layer itself constitutes an undeclared coordinate system whose time axis advances independently of downstream workflows. Gene identifiers occupy positions in a joint space of namespace, annotation release, genome assembly, and entity layer that evolves through retirements, merges, splits, and nomenclature reassignments, so atlas integration, retrospective reanalysis, and perturbation screens inherit temporal dependencies that existing utilities cannot surface: current mappers answer what an identifier resolves to now, rather than under what declared contract the feature space was constructed. Results: IDTrack reconceptualizes identifier harmonization as a time-indexed coordinate transformation by materializing annotation release history into a snapshot-bounded identifier graph and solving conversions through a time-traveling, contract-constrained pathfinder that pins release boundaries, assembly contexts, and ambiguity policies as explicit parameters rather than implicit endpoint state. This architecture surfaces reachability and ambiguity as interpretable outcome classes--unmapped, uniquely resolved, or ambiguously multi-target--enables atlas-scale harmonization with explicit collision handling, and records every mapping decision in a provenance ledger that transforms invisible preprocessing into citable methodological infrastructure whose coordinate choices can be inspected, compared, and reproduced. Availability and Implementation: Code: https://github.com/theislab/idtrack; package: pip install idtrack. Contact: kemal.inecik@helmholtz-munich.de; erkmenerken22@ku.edu.tr; fabian.theis@helmholtz-munich.de. Supplementary Information: Supplementary material elaborates on architectural decisions and implementation details.
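The snapshot-bounded identifier graph can be pictured as edges annotated with the releases in which a mapping held, with conversion restricted to edges alive inside the pinned window; a toy networkx sketch, not IDTrack's actual schema or API:

import networkx as nx

G = nx.DiGraph()
G.add_edge("ENSG_A.1", "ENSG_A.2", releases=range(95, 105))   # version bump
G.add_edge("ENSG_A.2", "SYMBOL_X", releases=range(100, 115))  # name mapping

def convert(g, src, dst, release):
    # Keep only edges whose mapping was valid in the pinned release.
    alive = [(u, v) for u, v, d in g.edges(data=True) if release in d["releases"]]
    return nx.shortest_path(g.edge_subgraph(alive), src, dst)

print(convert(G, "ENSG_A.1", "SYMBOL_X", release=102))  # path found
# convert(G, "ENSG_A.1", "SYMBOL_X", release=96) raises: unmapped under
# that contract, the "unmapped" outcome class described above.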
Chourrout, M.; Keenlyside, A.; Wanjau, E.; Balbastre, Y.; Yagis, E.; Brunet, J.; Stansby, D.; Engel, K.; Gui, X.; Thoennissen, J.; Dickscheid, T.; Lamalle, L.; Bellier, A.; Vivekananda, U.; Tafforeau, P.; Lee, P. D.; Walsh, C. L.
We present an isotropic 7.72 µm/voxel post-mortem human brain dataset acquired using Hierarchical Phase-Contrast Tomography (HiP-CT) at the ESRF Extremely Brilliant Source, beamline BM18. This fills a critical gap between whole-brain MRI at 100 µm resolution and serial-section histological reconstructions at 20 µm or finer. HiP-CT contrast, derived from X-ray phase shifts, enables rich 3D visualisation of complex neuroanatomy including white-matter bundles, microvasculature, and sub-nuclei. We provide open-source workflows for online data exploration, subvolume download, segmentation, and reintegration of analyses into the full dataset. We demonstrate the potential of this resource by tracing vasculature over long distances, segmenting nuclei, and extracting white-matter orientations with 3D structure-tensor analysis. High-resolution human brain datasets are transformative for quantitative neuroanatomy, circuit mapping, and validation of clinical imaging; this openly available resource is a critical step toward global access to next-generation multiscale brain imaging.
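The structure-tensor analysis used for white-matter orientations reduces to smoothing outer products of intensity gradients and eigen-decomposing the result; a minimal NumPy/SciPy sketch on a stand-in subvolume:

import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(vol, sigma=2.0):
    grads = np.gradient(vol.astype(float))          # [gz, gy, gx]
    J = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    w, v = np.linalg.eigh(J)                        # eigenvalues ascending
    # Eigenvector of the smallest eigenvalue = direction of least
    # intensity variation, i.e. the local fibre orientation.
    return v[..., :, 0]

vol = np.random.rand(16, 16, 16)                    # stand-in subvolume
print(structure_tensor_orientation(vol).shape)      # (16, 16, 16, 3)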
Ramirez-Aportela, E.; Zarrabeitia, O. L.; Fonseca, Y. C.; Ceska, T.; Subramaniam, S.; Carazo, J.-M.; Sorzano, C. O. S.
Cryogenic Electron Microscopy (cryo-EM) has transformed structural biology by enabling high-resolution reconstruction of macromolecular complexes from noisy projection images. However, the intrinsic heterogeneity and low signal-to-noise ratio of cryo-EM datasets make 2D classification a critical and computationally demanding step in the processing workflow. Here, we introduce AlignPCA-2D, a PCA-space Euclidean vector alignment method for fast, interpretable 2D classification in cryo-EM. By projecting particle images and class representations into a compressed latent PCA space, AlignPCA-2D reduces data dimensionality while preserving meaningful structural variability. The image-to-class assignment is then performed using Euclidean distance, enabling efficient and accurate classification. We benchmark AlignPCA-2D against established cryo-EM software, such as RELION and cryoSPARC, and demonstrate that it achieves competitive alignment accuracy while substantially reducing computational cost. This approach provides a lightweight alternative for large-scale 2D classification tasks, and its modular design makes it compatible with existing cryo-EM processing pipelines. [Table 2: Particle retention and overlap among 2D classification methods]
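The assignment step described above reduces to nearest-neighbour search in a shared PCA space; a minimal scikit-learn sketch on random stand-ins (the rotational/translational alignment search, central to the full method, is omitted):

import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
particles = rng.random((500, 64 * 64))      # flattened particle images
classes = rng.random((20, 64 * 64))         # flattened class averages

pca = PCA(n_components=32).fit(particles)   # shared compressed latent space
zp, zc = pca.transform(particles), pca.transform(classes)
assignment = cdist(zp, zc).argmin(axis=1)   # nearest class per particle
print(assignment[:10])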
Hua, X.; Han, K.; Ling, Z.; Reid, O.; Gao, Z.; Zhang, H.; Botchwey, E.; Forghani, P.; Liu, W.; Sawant, M. A.; Radmand, A.; Kim, H.; Dahlman, J. E.; Kesarwala, A.; Xu, C.; Jia, S.
The rapid convergence of advanced microscopy and deep learning is transforming cell biology by enabling imaging systems in which optical encoding and computational inference are jointly optimized for volumetric information capture and interpretation. However, broadly accessible three-dimensional imaging at high spatiotemporal resolution remains constrained by volumetric reconstruction throughput, susceptibility to artifacts, and the burden of collecting modality-matched training data. Here, we introduce PAVR, a physics-aware light-field imaging platform that integrates single-shot volumetric acquisition with fast, end-to-end volumetric reconstruction. PAVR is trained entirely using in silico system responses, avoiding reliance on external high-resolution ground-truth modalities and enabling sample-independent reconstruction across diverse biological contexts. Using fixed and live mammalian cells, we demonstrate multicolor volumetric imaging of subcellular organelles, three-dimensional tracking of autofluorescent particles, and high-speed visualization of organelle remodeling and interactions. We further extend PAVR to quantify coupled morphological and functional dynamics in beating human induced pluripotent stem cell-derived cardiomyocytes under pharmacological perturbation. Together, PAVR establishes a scalable hardware-software platform for high-throughput volumetric imaging and quantitative analysis of dynamic cellular systems in both basic and translational settings.
Lian, Y.; Adjavon, D.; Kawase, T.; Kim, J.; Fleishman, G.; Preibisch, S.; Funke, J.; Liu, Z. J.
Multiplexed protein imaging enables spatially resolved analysis of molecular organization in tissues, but existing spatial proteomics platforms remain constrained in scalability, throughput, and integration with RNA measurements and interpretable computational analysis. Here, we present an integrated spatial omics framework that combines highly multiplexed protein and RNA imaging with explainable machine learning to map cell-type-specific molecular and structural architectures at tissue scale. Using this platform, we simultaneously profiled up to 46 proteins and 79 RNA species across ~370,000 cells in intact mouse brain tissue at diffraction-limited subcellular resolution (~260 nm). We developed a scalable, open-source computational pipeline for large-scale image processing and analysis, and show that nuclear protein and chromatin features alone are sufficient to accurately classify brain cell types and their spatial organization. Incorporation of explainable deep learning further enabled identification of human-interpretable, cell-type-specific subnuclear structural features directly from imaging data, with independent quantitative validation. Together, this integrated experimental and computational framework enables tissue-scale spatial proteomics-based cell-type classification and structural feature discovery, providing a broadly applicable platform for mechanistic studies, high-content screening, and translational applications.
Gillet, V.; Sayre, M. E.; Badalamente, G.; Schieber, N. L.; Tedore, K.; Funke, J.; Heinze, S.
Connectomics has become essential for the study of brain function, yet for most research groups it remains prohibitively costly in imaging time, data storage, and analysis. Here, we present an imaging, processing, and analysis pipeline for multi-resolution image acquisition and circuit reconstruction. Applied to the central complex of six insect species, we were able to obtain global projectomes at cellular resolution (40-50 nm) with embedded local connectomes describing key computational compartments at synaptic resolution (8-12 nm). We provide standardized protocols for volume EM sample preparation, image acquisition and image alignment, combined with existing methods for µCT block trimming, automatic segmentation, synapse detection, collaborative skeleton tracing with CATMAID, and segmentation proofreading via CAVE. We validated our workflow by reconstructing head direction cells across all six insect species, which revealed deep conservation at the level of cell types, cell numbers and projection patterns, while also revealing circuit level specializations. Overall, our pipeline democratizes comparative connectomics by making this method accessible for small research groups with modest resources.
Yuan, L.; Zheng, Y.; Zhang, S.; Beroukhim, R.; Deshpande, A.
In imaging-based spatial transcriptomics, transcript-to-cell assignment shapes downstream biological interpretation including cell typing, ligand-receptor inference, and niche characterization. However, two-dimensional segmentation of volumetric tissue often yields mixed cellular profiles, while cells without detected nuclei are missed entirely, distorting the aforementioned downstream analyses. We present TRACER, which refines cellular representations in imaging-based transcriptomics by leveraging gene-gene coherence and spatial co-localization of transcripts observed directly in the data, without requiring external annotations or reference atlases. TRACER resolves mixed cellular profiles and reconstructs partial cells whose nuclei are not detected, enabling more complete representation of cells within the tissue section. We also introduce coherence-based metrics that quantify transcriptional purity and conflict, enabling platform-agnostic benchmarking of segmentation quality. Across diverse platforms, tissues, and segmentation methodologies, TRACER consistently and reproducibly improves the coherence of cellular profiles and the quality of downstream analyses.
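One way to picture a gene-gene coherence score, loosely in the spirit of the purity metrics above (TRACER's actual definition may differ): score each cell by how strongly its detected genes co-occur across the dataset, so that mixed or doublet-like profiles score low.

import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 40))          # cells x genes, toy data
cooc = np.corrcoef((counts > 0).T.astype(float))   # gene-gene co-occurrence

def coherence(cell_counts):
    """Mean pairwise co-occurrence among a cell's detected genes."""
    genes = np.flatnonzero(cell_counts)
    if len(genes) < 2:
        return np.nan
    sub = cooc[np.ix_(genes, genes)]
    return sub[np.triu_indices_from(sub, k=1)].mean()

scores = np.array([coherence(c) for c in counts])
print(np.nanmean(scores))  # low per-cell scores flag mixed profiles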